22 research outputs found

    From photons to big-data applications: terminating terabits

    Computer architectures have reached a watershed: the quantity of network data generated by user applications now exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while the gap grows between the quantity of networked data and per-system data-processing capacity. Despite this, demand continues to grow unabated in both task variety and task complexity. Networked computer systems provide a fertile environment in which new applications develop, and as they become akin to infrastructure, any limitation on the growth of their capacity and capabilities becomes an important constraint of concern to all computer users. Taking a networked computer system capable of processing terabits per second as a benchmark for scalability, we critique the state of the art in commodity computing and propose a wholesale reconsideration of the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach with potential application from nanoscale to data-centre-scale computers. This work was supported by the UK Engineering and Physical Sciences Research Council Internet Project EP/H040536/1, and by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory under contract FA8750-11-C-0249.

    Personal Data: Thinking Inside the Box

    We are in a ‘personal data gold rush’ driven by advertising being the primary revenue source for most online companies. These companies accumulate extensive personal data about individuals with minimal concern for us, the subjects of this process. This can cause many harms: privacy infringement, personal and professional embarrassment, restricted access to labour markets, restricted access to highest-value pricing, and many others. There is a critical need for technologies that enable alternative practices, so that individuals can participate in the collection, management and consumption of their personal data. In this paper we discuss the Databox, a personal networked device (and associated services) that collates and mediates access to personal data, allowing us to recover control of our online lives. We hope the Databox is a first step towards re-balancing power between us, the data subjects, and the corporations that collect and use our data. Work supported in part by the EU FP7 UCN project, grant agreement no. 611001.

    Towards real-time community detection in large networks

    The recent boom of large-scale Online Social Networks (OSNs) both enables and necessitates the use of parallelisable and scalable computational techniques for their analysis. We examine the problem of real-time community detection and a recently proposed linear-time (O(m) on a network with m edges) label-propagation or "epidemic" community detection algorithm. We identify characteristics and drawbacks of the algorithm and extend it with heuristics that facilitate reliable and multifunctional real-time community detection. With limited computational resources, we apply the algorithm to OSN data with 1 million nodes and about 58 million directed edges. Experiments and benchmarks reveal that the extended algorithm is not only faster but that its community detection accuracy compares favourably with popular modularity-gain optimisation algorithms, which are known to suffer from resolution limits. Comment: 10 pages, 11 figures.
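    The label-propagation scheme this abstract builds on can be sketched in a few lines. The following is a generic illustration of the basic algorithm (each full pass is O(m) in the number of edges), not the authors' extended variant; the function and the example graph are our own:

    ```python
    import random
    from collections import Counter

    def label_propagation(adj, max_iters=100):
        """Basic asynchronous label-propagation community detection.

        adj: dict mapping node -> list of neighbour nodes.
        Returns a dict mapping node -> community label.

        Every node starts in its own community; on each pass, visited in
        random order, a node adopts the label most common among its
        neighbours. The loop stops when a full pass changes nothing.
        """
        labels = {v: v for v in adj}
        nodes = list(adj)
        for _ in range(max_iters):
            random.shuffle(nodes)  # random visit order helps avoid oscillation
            changed = False
            for v in nodes:
                if not adj[v]:
                    continue  # isolated node keeps its own label
                counts = Counter(labels[u] for u in adj[v])
                best = counts.most_common(1)[0][0]
                if labels[v] != best:
                    labels[v] = best
                    changed = True
            if not changed:
                break  # converged: every label is a neighbourhood majority
        return labels

    # Toy example: two triangles joined by a single bridge edge.
    adj = {
        0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
        3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
    }
    labels = label_propagation(adj)
    ```

    The "epidemic" name fits: labels spread to neighbours like an infection, and dense regions settle on a shared label quickly.
    
    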

    Genomic epidemiology of SARS-CoV-2 in a UK university identifies dynamics of transmission

    Abstract: Understanding SARS-CoV-2 transmission in higher education settings is important to limit spread between students and into at-risk populations. In this study, we sequenced 482 SARS-CoV-2 isolates from the University of Cambridge from 5 October to 6 December 2020. We performed a detailed phylogenetic comparison with 972 isolates from the surrounding community, complemented with epidemiological and contact-tracing data, to determine transmission dynamics. We observed limited viral introductions into the university; the majority of student cases were linked to a single genetic cluster, likely following social gatherings at a venue outside the university. We identified considerable onward transmission associated with student accommodation and courses; this was effectively contained using local infection-control measures and following a national lockdown. Transmission clusters were largely segregated within the university or the community. Our study highlights key determinants of SARS-CoV-2 transmission and effective interventions in a higher education setting that will inform public health policy during pandemics.

    Network text editor (NTE)


    Promoting tolerance for delay tolerant network research


    Revisiting legacy high-speed TCP congestion control variants: An optimisation-theoretic analysis of multi-mode TCP

    We revisit the problem of link-capacity under-utilisation in TCP Congestion Control (TCP-CC) over High-Bandwidth-Delay-Product (High-BDP) networks. We address this problem with a multi-mode approach and propose TCP-Gentle as an example of a TCP-CC algorithm that uses it. While General Additive Increase Multiplicative Decrease (GAIMD) congestion control algorithms have received much attention in the literature, little has been said about modelling multi-mode GAIMD. To this end, we provide a tractable optimisation-theoretic model of TCP-Gentle that generalises to any multi-mode GAIMD. We show through analysis, simulation, and real-testbed experiments with a single flow, two flows, and a single flow with background web traffic, that TCP-Gentle is competitive with existing TCP variants. In particular, under certain assumptions, TCP-Gentle can outperform TCP-YeAH in fairness to TCP-NewReno. Moreover, TCP-Gentle is gentler to the network: it maintains minimal average queues of less than 1.5% of the pipe's BDP and largely resembles a highly concave congestion window.
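    The multi-mode GAIMD idea can be illustrated with a toy congestion-window update rule. The parameter values and the delay-based mode switch below are our own illustrative assumptions, not the actual TCP-Gentle specification:

    ```python
    def multimode_gaimd(cwnd, event, queue_delay, delay_thresh=0.01):
        """One window update of a hypothetical two-mode GAIMD controller.

        cwnd:        current congestion window (segments)
        event:       "ack" for an acknowledged round, "loss" on packet loss
        queue_delay: estimated queueing delay in seconds (RTT minus base RTT)

        Mode selection: while queueing delay is low the pipe is presumed
        under-utilised, so a fast mode probes aggressively; once queueing
        builds up, a gentle mode takes over with a small additive increase
        and a mild multiplicative decrease. Values are illustrative only.
        """
        if queue_delay < delay_thresh:
            alpha, beta = 8.0, 0.5   # fast mode: large increase, halving on loss
        else:
            alpha, beta = 1.0, 0.8   # gentle mode: small increase, mild back-off
        if event == "ack":
            return cwnd + alpha                # additive increase per RTT
        if event == "loss":
            return max(1.0, cwnd * beta)       # multiplicative decrease
        return cwnd                            # unknown event: no change

    # Low delay: fast additive increase.
    w = multimode_gaimd(10.0, "ack", queue_delay=0.001)   # 18.0
    # High delay: gentle multiplicative decrease on loss.
    w = multimode_gaimd(10.0, "loss", queue_delay=0.05)   # 8.0
    ```

    Any multi-mode GAIMD fits this shape: a mode-selection rule chooses an (alpha, beta) pair, and the window then evolves by ordinary AIMD under those parameters.
    
    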

    An architectural framework for heterogeneous networking

    Abstract: The growth over the last decade in the use of wireless networking devices has been explosive. Soon many devices will have multiple network interfaces, each with very different characteristics. We believe a framework that encapsulates the key challenges of heterogeneous networking is required. Just as a map helps one plan a journey, a framework is needed to help us move forward in this unexplored area. The approach taken here is similar to the OSI model, in which tightly defined layers specify functionality, allowing a modular approach to the extension of systems and the interchange of their components, while providing a model more oriented towards heterogeneity and mobility.